EAGAN: Efficient Two-Stage Evolutionary Architecture Search for GANs

Authors

Abstract

Generative adversarial networks (GANs) have proven successful in image generation tasks. However, GAN training is inherently unstable. Although many works try to stabilize it by manually modifying the GAN architecture, doing so requires much expertise. Neural architecture search (NAS) has become an attractive solution for searching GANs automatically. Early NAS-GANs search only generators to reduce search complexity, but this leads to a sub-optimal GAN. Some recent works try to search both the generator (G) and the discriminator (D), but they suffer from the instability of GAN training. To alleviate this instability, we propose an efficient two-stage evolutionary algorithm-based NAS framework for searching GANs, namely EAGAN. We decouple the search of G and D into two stages: stage-1 searches G with a fixed D and adopts a many-to-one training strategy, and stage-2 searches D with the optimal G found in stage-1 and adopts one-to-one training and weight-resetting strategies to enhance the stability of GAN training. Both stages use the non-dominated sorting method to produce Pareto-front architectures under multiple objectives (e.g., model size, Inception Score (IS), and Fréchet Inception Distance (FID)). EAGAN is applied to the unconditional image generation task and can efficiently finish searching GANs on the CIFAR-10 dataset in 1.2 GPU days. Our searched GANs achieve competitive results on CIFAR-10 (IS = 8.81 ± 0.10, FID = 9.91) and surpass prior studies on the STL-10 dataset (IS = 10.44 ± 0.087, FID = 22.18). Source code: https://github.com/marsggbo/EAGAN .
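Both stages rely on non-dominated sorting to keep only Pareto-optimal architectures across generations. As an illustration only, and not the authors' implementation, the following minimal Python sketch shows how a first non-dominated front can be extracted from candidates scored by the three objectives the abstract names (model size, IS, FID); the candidate names and scores below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        params_m: float   # model size in millions of parameters (minimize)
        inception: float  # Inception Score (maximize)
        fid: float        # Frechet Inception Distance (minimize)

    def dominates(a: Candidate, b: Candidate) -> bool:
        # a dominates b if a is no worse in every objective
        # and strictly better in at least one
        no_worse = (a.params_m <= b.params_m and a.inception >= b.inception
                    and a.fid <= b.fid)
        strictly_better = (a.params_m < b.params_m or a.inception > b.inception
                           or a.fid < b.fid)
        return no_worse and strictly_better

    def pareto_front(population):
        # keep every candidate that no other candidate dominates
        return [c for c in population
                if not any(dominates(o, c) for o in population if o is not c)]

    # hypothetical candidates, not results from the paper
    population = [
        Candidate("arch-A", params_m=6.5, inception=8.2, fid=13.0),
        Candidate("arch-B", params_m=8.7, inception=8.8, fid=10.1),
        Candidate("arch-C", params_m=9.5, inception=8.3, fid=14.2),
    ]
    print([c.name for c in pareto_front(population)])  # ['arch-A', 'arch-B']

Here arch-C is dropped because arch-B beats it on all three objectives, while arch-A and arch-B are incomparable (one is smaller, the other scores better on IS and FID), so both stay on the front and survive selection.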


Related Articles

Efficient Two Stage Voting Architecture for Pairwise Multi-label Classification

A common approach for solving multi-label classification problems using problem-transformation methods and dichotomizing classifiers is the pair-wise decomposition strategy. One of the problems with this approach is the need to query a quadratic number of binary classifiers to make a prediction, which can be quite time consuming, especially in classification problems with a large number of la...


Evolutionary Architecture Search For Deep Multitask Networks

Multitask learning, i.e., learning several tasks at once with the same neural network, can improve performance in each of the tasks. Designing deep neural network architectures for multitask learning is a challenge: there are many ways to tie the tasks together, and the design choices matter. The size and complexity of this problem exceed human design ability, making it a compelling domain for ...


Hierarchical Representations for Efficient Architecture Search

We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our a...


Two stage architecture for multi-label learning

A common approach to solving multi-label learning problems is to use problem transformation methods and dichotomizing classifiers as in the pair-wise decomposition strategy. One of the problems with this strategy is the need to query a quadratic number of binary classifiers to make a prediction, which can be quite time consuming, especially in learning problems with a large number of label...


A Two-Stage Model for Expert Search

This paper is concerned with expert search, a search task where the user types a query representing a topic and the search system returns a ranked list of people who are considered experts on the topic. The system does this using the evidence existing in a document collection. We proposed a model for performing the task in our TREC 2005 submission, referred to as the two-stage model. Since then, a ...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2022

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-19787-1_3